595 research outputs found

    The behavior of fund managers with benchmarks


    Nominal or Real? The Impact of Regional Price Levels on Satisfaction with Life

    According to economic theory, real income, i.e., nominal income adjusted for purchasing power, should be the relevant source of life satisfaction. Previous work, however, has only studied the impact of inflation-adjusted nominal income and has not taken into account regional differences in purchasing power. Therefore, we use a novel data set to study how regional price levels affect satisfaction with life. The data set comprises about 7 million data points that are used to construct a price level for each of the 428 administrative districts in Germany. We estimate pooled OLS and ordered probit models that include a comprehensive set of individual-level, time-varying and time-invariant control variables, as well as control variables that capture district heterogeneity other than the price level. Our results show that higher price levels significantly reduce life satisfaction. Furthermore, we find that a higher price level tends to induce a larger loss in life satisfaction than a corresponding decrease in nominal income. A formal test, however, does not reject the neutrality of money. Our results provide an argument in favor of regional indexation of government transfer payments such as social welfare benefits.
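
    The pooled OLS and ordered probit estimation described above can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors' code; the input file and column names (life_satisfaction, log_income, log_price_level, and the controls) are hypothetical.

        # Minimal ordered probit sketch for life satisfaction (hypothetical data and columns).
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        df = pd.read_csv("soep_with_district_prices.csv")   # hypothetical input file

        # Life satisfaction is treated as an ordered categorical outcome (e.g., a 0-10 scale).
        y = pd.Categorical(df["life_satisfaction"], ordered=True)
        X = df[["log_income", "log_price_level", "age", "unemployed"]]  # illustrative controls

        result = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
        print(result.summary())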

    Anderson cross-localization

    We report Anderson localization in two-dimensional optical waveguide arrays with disorder in waveguide separation introduced along one axis of the array, in an uncorrelated fashion for each waveguide row. We show that the anisotropic nature of such disorder induces strong localization along both array axes. The degree of localization along the cross-axis, however, remains weaker than that along the direction in which the disorder is introduced. This effect is illustrated both theoretically and experimentally. Comment: 4 pages, 4 figures, to appear in Optics Letters
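
    As a rough numerical illustration of the setup described above, the array can be modelled as a two-dimensional tight-binding lattice in which the couplings along one axis are drawn independently for each row, and the localization of the eigenmodes can then be quantified. The sketch below is a schematic toy model under simplifying assumptions (nearest-neighbour coupling, uniform coupling along the ordered axis), not the authors' calculation.

        # Toy 2D tight-binding lattice with off-diagonal disorder along one axis.
        import numpy as np

        N = 20                                   # waveguides per side
        rng = np.random.default_rng(0)

        # Couplings along x vary randomly row by row (disordered separations);
        # couplings along y are kept uniform.
        cx = 1.0 + 0.5 * rng.uniform(-1, 1, size=(N, N - 1))
        cy = 1.0

        def idx(i, j):
            return i * N + j

        H = np.zeros((N * N, N * N))
        for i in range(N):
            for j in range(N):
                if j < N - 1:                    # disordered axis
                    H[idx(i, j), idx(i, j + 1)] = H[idx(i, j + 1), idx(i, j)] = cx[i, j]
                if i < N - 1:                    # ordered (cross) axis
                    H[idx(i, j), idx(i + 1, j)] = H[idx(i + 1, j), idx(i, j)] = cy

        vals, vecs = np.linalg.eigh(H)
        ipr = np.sum(np.abs(vecs) ** 4, axis=0)  # inverse participation ratio per eigenmode
        print("mean IPR (larger means more localized):", ipr.mean())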

    Black coal, thin ice: the discursive legitimisation of Australian coal in the age of climate change

    Despite mounting urgency to mitigate climate change, new coal mines have recently been approved in various countries, including in Southeast Asia and Australia. Adani’s Carmichael coal mine project in the Galilee Basin, Queensland (Australia), was approved in June 2019 after 9 years of political contestation. Counteracting global efforts to decarbonise energy systems, this mine will substantially increase Australia’s per capita CO2 emissions, which are already among the highest in the world. Australia’s deepening carbon lock-in can be attributed to the essential economic role played by the coal industry, which gives it structural power to dominate political dynamics. Furthermore, tenacious networks among the traditional mass media, mining companies, and their shareholders have reinforced the politico-economic influence of the industry, allowing the mass media to provide a venue for the industry’s outside lobbying strategies as well as ample backing for its discursive legitimisation with pro-coal narratives. To investigate the enduring symbiosis between the coal industry, business interests, the Australian state, and mainstream media, we draw on natural language processing techniques and systematically study discourses about the coal mine in traditional and social media between 2017 and 2020. Our results indicate that while the mine’s approval was aided by the pro-coal narratives of Queensland’s main daily newspaper, the Courier-Mail, collective public sentiment on Twitter has diverged significantly from the newspaper’s stance. The rationale for the mine’s approval, notwithstanding increasing public contestation, lies in the enduring symbiosis between the traditional economic actors and the state; and yet, our results highlight a potential corner of the discursive battlefield favourable for hosting more diverse arguments.
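
    The sentiment measurement hinted at above (collective public sentiment on Twitter) could, for instance, be approximated with an off-the-shelf analyser such as NLTK's VADER. The snippet below is purely illustrative: the example tweets are made up and nothing here is claimed to match the authors' actual pipeline.

        # Illustrative sentiment scoring of tweet-like text with VADER (not the authors' pipeline).
        import nltk
        from nltk.sentiment import SentimentIntensityAnalyzer

        nltk.download("vader_lexicon", quiet=True)
        sia = SentimentIntensityAnalyzer()

        tweets = [                                            # made-up examples
            "The Carmichael mine will create jobs for Queensland.",
            "Approving a new coal mine in a climate crisis is reckless.",
        ]
        for text in tweets:
            print(f"{sia.polarity_scores(text)['compound']:+.2f}  {text}")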

    Cosmoglobe DR1. III. First full-sky model of polarized synchrotron emission from all WMAP and Planck LFI data

    We present the first model of full-sky polarized synchrotron emission that is derived from all WMAP and Planck LFI frequency maps. The basis of this analysis is the set of end-to-end reprocessed Cosmoglobe Data Release 1 sky maps presented in a companion paper, which have significantly lower instrumental systematics than the legacy products from each experiment. We find that the resulting polarized synchrotron amplitude map has an average noise rms of 3.2 μK at 30 GHz and 2° FWHM, which is 30% lower than the recently released BeyondPlanck model that included only LFI+WMAP Ka-V data, and 29% lower than the WMAP K-band map alone. The mean B-to-E power spectrum ratio is 0.40 ± 0.02, with amplitudes consistent with those measured previously by Planck and QUIJOTE. Assuming a power law model for the synchrotron spectral energy distribution, and using the T-T plot method, we find a full-sky inverse noise-variance weighted mean of β_s = −3.07 ± 0.07 between Cosmoglobe DR1 K-band and 30 GHz, in good agreement with previous estimates. In summary, the novel Cosmoglobe DR1 synchrotron model is both more sensitive and systematically cleaner than similar previous models, and it has a more complete error description that is defined by a set of Monte Carlo posterior samples. We believe that these products are preferable over previous Planck and WMAP products for all synchrotron-related scientific applications, including simulation, forecasting and component separation. Comment: 15 pages, 15 figures, submitted to A&A
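
    The T-T plot estimate of β_s mentioned above follows from the power-law assumption: in antenna temperature, T(ν2) = T(ν1)(ν2/ν1)^β_s, so the slope of a pixel-by-pixel scatter plot between two frequency maps gives β_s = ln(slope)/ln(ν2/ν1). The snippet below is a toy check of that relation on simulated pixel values, not the Cosmoglobe analysis.

        # Toy T-T plot recovery of a synchrotron spectral index from two frequency maps.
        import numpy as np

        nu1, nu2 = 23.0, 30.0                # GHz (roughly K-band and 30 GHz)
        beta_true = -3.07
        rng = np.random.default_rng(1)

        T1 = rng.normal(0.0, 50.0, 10_000)                                   # simulated pixels (uK)
        T2 = T1 * (nu2 / nu1) ** beta_true + rng.normal(0.0, 3.0, T1.size)   # scaled copy plus noise

        slope = np.polyfit(T1, T2, 1)[0]     # least-squares slope of the T-T scatter plot
        beta_est = np.log(slope) / np.log(nu2 / nu1)
        print(f"recovered beta_s = {beta_est:.2f}")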

    Cosmoglobe: Towards end-to-end CMB cosmological parameter estimation without likelihood approximations

    We implement support for a cosmological parameter estimation algorithm as proposed by Racine et al. (2016) in Commander, and quantify its computational efficiency and cost. For a semi-realistic simulation similar to Planck LFI 70 GHz, we find that the computational cost of producing one single sample is about 60 CPU-hours and that the typical Markov chain correlation length is ~100 samples. The net effective cost per independent sample is ~6 000 CPU-hours, in comparison with all low-level processing costs of 812 CPU-hours for Planck LFI and WMAP in Cosmoglobe Data Release 1. Thus, although technically possible to run already in its current state, future work should aim to reduce the effective cost per independent sample by at least one order of magnitude to avoid excessive runtimes, for instance through multi-grid preconditioners and/or derivative-based Markov chain sampling schemes. This work demonstrates the computational feasibility of true Bayesian cosmological parameter estimation with end-to-end error propagation for high-precision CMB experiments without likelihood approximations, but it also highlights the need for additional optimizations before it is ready for full production-level analysis. Comment: 10 pages, 8 figures. Submitted to A&A
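
    The quoted effective cost follows directly from the two numbers in the abstract: about 60 CPU-hours per sample times a correlation length of roughly 100 samples gives on the order of 6 000 CPU-hours per independent sample. As a trivial check:

        # Effective cost per independent sample = cost per sample x Markov chain correlation length.
        cost_per_sample = 60          # CPU-hours (quoted above)
        correlation_length = 100      # samples (quoted above)
        print(cost_per_sample * correlation_length, "CPU-hours per independent sample")  # ~6 000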

    From BeyondPlanck to Cosmoglobe: Preliminary WMAP Q-band analysis

    We present the first application of the Cosmoglobe analysis framework by analyzing 9-year WMAP time-ordered observations using similar machinery as BeyondPlanck utilizes for Planck LFI. We analyze only the Q-band (41 GHz) data and report on the low-level analysis process from uncalibrated time-ordered data to calibrated maps. Most of the existing BeyondPlanck pipeline may be reused for WMAP analysis with minimal changes to the existing codebase. The main modification is the implementation of the same preconditioned biconjugate gradient mapmaker used by the WMAP team. Producing a single WMAP Q1-band sample requires 22 CPU-hrs, which is slightly more than the cost of a Planck 44 GHz sample of 17 CPU-hrs; this demonstrates that full end-to-end Bayesian processing of the WMAP data is computationally feasible. In general, our recovered maps are very similar to the maps released by the WMAP team, although with two notable differences. In temperature we find a ~2 μK quadrupole difference that most likely is caused by different gain modeling, while in polarization we find a distinct 2.5 μK signal that has been previously called poorly measured modes by the WMAP team. In the Cosmoglobe processing, this pattern arises from temperature-to-polarization leakage from the coupling between the CMB Solar dipole, transmission imbalance, and sidelobes. No traces of this pattern are found in either the frequency map or the TOD residual map, suggesting that the current processing has succeeded in modelling these poorly measured modes within the assumed parametric model by using Planck information to break the sky-synchronous degeneracies inherent in the WMAP scanning strategy. Comment: 11 figures, submitted to A&A. Includes updated instrument model and changes addressing referee comments
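
    The preconditioned biconjugate-gradient map-making mentioned above amounts, in its generic form, to solving the normal equations (P^T N^-1 P) m = P^T N^-1 d for the sky map m, given a pointing matrix P, noise covariance N, and time-ordered data d. The sketch below is a toy version of that linear solve with white noise and a simple hit-count preconditioner; it is not the WMAP team's or the Cosmoglobe pipeline's implementation.

        # Toy map-maker: solve (P^T N^-1 P) m = P^T N^-1 d with preconditioned BiCGSTAB.
        import numpy as np
        from scipy.sparse import csr_matrix, diags
        from scipy.sparse.linalg import LinearOperator, bicgstab

        rng = np.random.default_rng(2)
        npix, nsamp, sigma = 500, 20_000, 0.1

        pix = rng.integers(0, npix, nsamp)                 # toy pointing: pixel hit by each sample
        P = csr_matrix((np.ones(nsamp), (np.arange(nsamp), pix)), shape=(nsamp, npix))
        true_map = rng.normal(0.0, 1.0, npix)
        d = P @ true_map + rng.normal(0.0, sigma, nsamp)   # toy time-ordered data with white noise

        Ninv = diags(np.full(nsamp, 1.0 / sigma**2))       # white-noise inverse covariance
        A = LinearOperator((npix, npix), matvec=lambda m: P.T @ (Ninv @ (P @ m)))
        b = P.T @ (Ninv @ d)

        hits = np.maximum(np.asarray(P.sum(axis=0)).ravel(), 1.0) / sigma**2
        M = LinearOperator((npix, npix), matvec=lambda x: x / hits)  # diagonal preconditioner

        m, info = bicgstab(A, b, M=M)
        print("converged:", info == 0, "max map error:", np.abs(m - true_map).max())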